Multi-hop reasoning requires aggregating multiple documents to answer a complex question. Existing methods usually decompose the multi-hop question into simpler single-hop questions in order to exhibit an explainable reasoning process. However, they ignore grounding each reasoning step on its supporting facts, which often produces inaccurate decompositions. In this paper, we propose an interpretable step-wise reasoning framework that incorporates both single-hop supporting sentence identification and single-hop question generation at each intermediate step, and leverages the inference of the current hop to proceed to the next until the final answer is reached. We adopt a unified reader model for both intermediate-hop and final-hop reasoning and optimize them jointly for more accurate and robust multi-hop reasoning. We conduct experiments on two benchmark datasets, HotpotQA and 2WikiMultiHopQA. The results show that our method can effectively boost performance and also yields a better interpretable reasoning process without requiring decomposition supervision.
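As a rough illustration of this step-wise loop, the sketch below (hypothetical helper names; a word-overlap scorer stands in for the unified reader) shows how each hop selects a supporting sentence, forms a single-hop question grounded in it, and passes the intermediate result to the next hop.

```python
# Toy sketch of step-wise multi-hop reasoning; not the paper's implementation.

def select_supporting_sentence(question, sentences):
    """Pick the sentence with the largest word overlap with the question (toy scorer)."""
    q_words = set(question.lower().split())
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))

def generate_single_hop_question(question, supporting_sentence, hop):
    """Stand-in for the single-hop question generator conditioned on the supporting fact."""
    return f"[hop {hop}] {question} | grounded in: {supporting_sentence}"

def answer_single_hop(single_hop_question, supporting_sentence):
    """Stand-in for the unified reader's intermediate-hop answer (here: the sentence itself)."""
    return supporting_sentence

def stepwise_multihop_qa(question, documents, max_hops=2):
    sentences = [s for doc in documents for s in doc.split(". ") if s]
    current_question, trace = question, []
    for hop in range(1, max_hops + 1):
        support = select_supporting_sentence(current_question, sentences)
        sub_q = generate_single_hop_question(current_question, support, hop)
        bridge = answer_single_hop(sub_q, support)
        trace.append({"hop": hop, "support": support, "sub_question": sub_q})
        current_question = f"{question} Given that: {bridge}"
    return trace

docs = ["Alice was born in Paris. Paris is the capital of France.",
        "France is a country in Europe."]
for step in stepwise_multihop_qa("In which country was Alice born?", docs):
    print(step["hop"], "->", step["support"])
```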
Multimodal pre-training and knowledge discovery are two important research topics in multimodal machine learning. Nevertheless, no existing work attempts to connect knowledge discovery with knowledge-guided multimodal pre-training. In this paper, we propose to unify them into a continuous learning framework for mutual improvement. Taking open-domain uni-modal datasets of images and texts as input, we maintain a knowledge graph as the foundation supporting these two tasks. For knowledge discovery, a pre-trained model is used to identify cross-modal links on the graph. For model pre-training, the knowledge graph serves as external knowledge to guide model updates. These two steps are executed iteratively in our continual learning framework. Experimental results on MS-COCO and Flickr30K, with respect to both knowledge discovery and the pre-trained model, validate the effectiveness of our framework.
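The alternating structure of the framework can be sketched as follows; the similarity table, threshold, and update rule are toy stand-ins for the pre-trained model and its training, used only to show how discovery and pre-training feed each other.

```python
# Minimal sketch of the alternating discovery/pre-training loop; illustrative names only.
import random

random.seed(0)

images = ["img_dog", "img_cat", "img_car"]
texts = ["a photo of a dog", "a cat on a sofa", "a red sports car"]
knowledge_graph = {"links": set()}               # discovered image-text edges
model = {("img_dog", "a photo of a dog"): 0.4}   # toy "pre-trained" similarity table

def similarity(model, image, text):
    return model.get((image, text), random.random() * 0.5)

def knowledge_discovery(model, kg, threshold=0.35):
    """Use the current model to add confident cross-modal links to the graph."""
    for img in images:
        for txt in texts:
            if similarity(model, img, txt) > threshold:
                kg["links"].add((img, txt))

def knowledge_guided_pretraining(model, kg, lr=0.2):
    """Use graph links as external knowledge to update the model (toy score bump)."""
    for img, txt in kg["links"]:
        model[(img, txt)] = model.get((img, txt), 0.0) + lr

for step in range(3):                            # iterate the two stages continually
    knowledge_discovery(model, knowledge_graph)
    knowledge_guided_pretraining(model, knowledge_graph)
    print(f"iteration {step}: {len(knowledge_graph['links'])} links discovered")
```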
A diagnosis-oriented dialogue system queries the patient's health condition and makes predictions about possible diseases through continuous interaction with the patient. A few studies use reinforcement learning (RL) to learn the optimal policy from the joint action space of symptoms and diseases. However, existing RL (or non-RL) methods cannot achieve sufficiently good prediction accuracy and remain far from the upper limit. To address this problem, we propose a decoupled automatic diagnostic framework, DxFormer, which divides the diagnosis process into two steps: symptom inquiry and disease diagnosis, where the transition from symptom inquiry to disease diagnosis is explicitly determined by a stopping criterion. In DxFormer, we treat each symptom as a token and formalize symptom inquiry and disease diagnosis as a language generation model and a sequence classification model, respectively. We use an inverted version of the Transformer, i.e., a decoder-encoder structure, to learn symptom representations by jointly optimizing the reinforcement-learning reward and the cross-entropy loss. Extensive experiments on three public real-world datasets show that our proposed model can effectively learn doctors' clinical experience and achieve state-of-the-art results in terms of symptom recall and diagnostic accuracy.
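A minimal sketch of the decoupled two-step structure is given below, using standard PyTorch Transformer modules with assumed sizes and vocabulary; it is not the authors' DxFormer code, but it shows a decoder-style inquiry loop with an explicit stopping token followed by encoder-style disease classification.

```python
# Untested sketch of a decoupled inquiry/diagnosis model; dimensions and vocabulary assumed.
import torch
import torch.nn as nn

NUM_SYMPTOMS, NUM_DISEASES, D_MODEL, STOP = 50, 10, 64, 0  # token 0 = [STOP]

class ToyDxFormer(nn.Module):
    def __init__(self):
        super().__init__()
        self.sym_emb = nn.Embedding(NUM_SYMPTOMS, D_MODEL)
        dec_layer = nn.TransformerDecoderLayer(D_MODEL, nhead=4, batch_first=True)
        enc_layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.inquirer = nn.TransformerDecoder(dec_layer, num_layers=2)   # symptom inquiry
        self.diagnoser = nn.TransformerEncoder(enc_layer, num_layers=2)  # disease diagnosis
        self.next_symptom = nn.Linear(D_MODEL, NUM_SYMPTOMS)
        self.disease_head = nn.Linear(D_MODEL, NUM_DISEASES)

    def inquire(self, symptom_ids):
        """Score the next symptom to ask about, given symptoms collected so far."""
        x = self.sym_emb(symptom_ids)
        h = self.inquirer(tgt=x, memory=x)            # toy self-conditioning memory
        return self.next_symptom(h[:, -1])            # logits over symptom vocabulary

    def diagnose(self, symptom_ids):
        """Classify the disease from the full symptom sequence."""
        h = self.diagnoser(self.sym_emb(symptom_ids))
        return self.disease_head(h.mean(dim=1))       # logits over diseases

model = ToyDxFormer()
collected = torch.tensor([[3, 17]])                   # self-reported symptom tokens
for _ in range(5):                                    # inquiry phase
    nxt = model.inquire(collected).argmax(dim=-1)
    if nxt.item() == STOP:                            # explicit stopping criterion
        break
    collected = torch.cat([collected, nxt.unsqueeze(0)], dim=1)
print(model.diagnose(collected).argmax(dim=-1))       # diagnosis phase
```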
In recent years, interest has arisen in using machine learning to improve the efficiency of automatic medical consultation and enhance the patient experience. In this article, we propose two frameworks to support automatic medical consultation, namely doctor-patient dialogue understanding and task-oriented interaction. We create a new large medical dialogue dataset with multi-level fine-grained annotations and establish five independent tasks, including named entity recognition, dialogue act classification, symptom label inference, medical report generation, and diagnosis-oriented dialogue policy. We report a set of benchmark results for each task, which shows the usability of the dataset and sets a baseline for future studies. Both code and data are available from https://github.com/lemuria-wchen/imcs21.
Previous vision-language pre-training models mainly construct multi-modal inputs from tokens and objects (pixels) and then perform cross-modal interaction between them. We argue that inputs of only tokens and objects limit high-level semantic alignment such as phrase-to-region grounding. Meanwhile, multi-level alignments are inherently consistent and able to facilitate representation learning synergistically. Therefore, in this paper, we propose to learn Multi-level semantic alignment for Vision-language Pre-TRaining (MVPTR). In MVPTR, we follow the nested structure of both modalities to introduce concepts as high-level semantics. To ease learning from multi-modal multi-level inputs, our framework is split into two stages: the first stage focuses on intra-modality multi-level representation learning, while the second stage enforces cross-modal interactions via coarse-grained and fine-grained semantic alignment tasks. In addition to the commonly used image-text matching and masked language modeling tasks, we introduce a masked concept recovery task in the first stage to enhance concept representation learning, and two further tasks in the second stage to explicitly encourage multi-level alignments across modalities. Our code is available at https://github.com/junction4nako/mvp_pytorch.
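For the coarse-grained cross-modal alignment mentioned above, a standard symmetric contrastive (InfoNCE-style) objective is a useful reference point; the snippet below is a generic formulation, not MVPTR's exact loss, and omits the fine-grained and concept-level tasks.

```python
# Generic coarse-grained image-text alignment loss; shown for illustration only.
import torch
import torch.nn.functional as F

def coarse_alignment_loss(image_feats, text_feats, temperature=0.07):
    """image_feats, text_feats: (batch, dim) pooled representations of matched pairs."""
    img = F.normalize(image_feats, dim=-1)
    txt = F.normalize(text_feats, dim=-1)
    logits = img @ txt.t() / temperature            # (batch, batch) similarity matrix
    targets = torch.arange(img.size(0))             # i-th image matches i-th text
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

loss = coarse_alignment_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```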
Impressive progress has been made on the challenging multi-hop QA task in recent years. However, these QA models may fail when facing disturbances in the input text, and the interpretability of their multi-hop reasoning remains uncertain. Previous adversarial attack works usually edit the whole question sentence, which has limited ability to test entity-based multi-hop reasoning. In this paper, we propose an adversarial attack method based on multi-hop reasoning chains. We formulate the multi-hop reasoning chain from the query entity to the answer entity in a constructed graph, which allows us to align the question to each reasoning hop and thereby attack any hop. We categorize questions into different reasoning types and perturb the part of the question corresponding to the selected reasoning hop to generate distracting sentences. We test our adversarial scheme on three QA models on the HotpotQA dataset. The results demonstrate significant performance degradation on both answer and supporting-fact prediction, verifying the effectiveness of our reasoning-chain-based attack method on multi-hop reasoning models and the vulnerability of these models. Our adversarial re-training further improves the performance and robustness of these models.
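The hop-level attack can be caricatured as entity substitution on the supporting fact of the chosen hop; the sketch below uses hand-written entities and fake candidates purely for illustration, whereas the paper aligns question parts to hops on a constructed reasoning-chain graph.

```python
# Toy sketch of hop-level distractor construction; entities and fakes are hypothetical.
reasoning_chain = [
    {"hop": 1, "entity": "Inception", "support": "Inception was directed by Christopher Nolan."},
    {"hop": 2, "entity": "Christopher Nolan", "support": "Christopher Nolan was born in London."},
]
fake_entities = {"Christopher Nolan": "James Cameron", "Inception": "Avatar"}

def attack_hop(chain, hop_to_attack):
    """Build a distracting sentence by swapping the attacked hop's entity for a fake one."""
    step = chain[hop_to_attack - 1]
    fake = fake_entities[step["entity"]]
    return step["support"].replace(step["entity"], fake)

distractor = attack_hop(reasoning_chain, hop_to_attack=2)
print(distractor)   # appended to the context to probe the model's reasoning at hop 2
```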
Vision-and-Language Navigation (VLN) is a task in which an agent navigates an embodied indoor environment under human instructions. Previous works ignore the distribution of sample difficulty, which we argue may degrade agent performance. To address this issue, we propose a curriculum-based training paradigm for the VLN task that balances human prior knowledge with the agent's learning progress on training samples. We develop principles of curriculum design and re-arrange the benchmark Room-to-Room (R2R) dataset to make it suitable for curriculum training. Experiments show that our method is model-agnostic and can significantly improve the performance, generalizability, and training efficiency of current state-of-the-art navigation agents without increasing model complexity.
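A simple way to picture the curriculum re-arrangement is a sampler that releases progressively harder training pools; the sketch below proxies difficulty by ground-truth path length and uses hypothetical field names for R2R-style samples.

```python
# Toy curriculum sampler; difficulty measure and field names are illustrative assumptions.
def curriculum_rounds(samples, num_rounds=3):
    """Sort samples by difficulty and release a progressively larger, harder pool per round."""
    ordered = sorted(samples, key=lambda s: s["path_length"])
    for r in range(1, num_rounds + 1):
        cutoff = int(len(ordered) * r / num_rounds)
        yield ordered[:cutoff]            # each round trains on an enlarged pool

r2r_like = [{"id": i, "path_length": p} for i, p in enumerate([4, 9, 5, 7, 12, 6])]
for round_idx, pool in enumerate(curriculum_rounds(r2r_like), start=1):
    print(f"round {round_idx}: train on {[s['id'] for s in pool]}")
```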
The matching model is essential to image-text retrieval frameworks. Existing research usually trains the model with a triplet loss and explores various strategies to retrieve hard negative sentences from the dataset. We argue that current retrieval-based negative sample construction methods are limited by the scale of the dataset and thus fail to identify high-difficulty negative samples for each image. We propose TAiloring neGative Sentences with Discrimination and Correction (TAGS-DC) to automatically generate synthetic sentences as negative samples. TAGS-DC is composed of masking and refilling to generate synthetic negative sentences of higher difficulty. To maintain this difficulty during training, we mutually improve retrieval and generation through parameter sharing. To further exploit the fine-grained semantics in negative sentences, we propose two auxiliary tasks, namely word discrimination and word correction, to improve training. In experiments, we verify the effectiveness of our model on MS-COCO and Flickr30K compared with current state-of-the-art models, and demonstrate its robustness and faithfulness in further analysis. Our code is available at https://github.com/libertfan/tags.
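The mask-and-refill step can be approximated with an off-the-shelf masked language model, as in the sketch below (requires the transformers package and downloading bert-base-uncased); TAGS-DC instead shares parameters with the retrieval model and adds the discrimination and correction objectives.

```python
# Illustrative mask-and-refill negative generation with a generic masked LM.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

caption = "a man riding a horse on the beach"
masked = caption.replace("horse", fill.tokenizer.mask_token, 1)   # mask one word
candidates = fill(masked, top_k=5)

# Keep refills that change the original word, yielding hard synthetic negatives.
negatives = [c["sequence"] for c in candidates if c["token_str"].strip() != "horse"]
print(negatives)
```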
Aspect-based sentiment analysis aims to identify the sentiment polarity towards a specific aspect in product reviews. We observe that about 30% of reviews do not contain obvious opinion words, yet still convey a clear, human-perceivable sentiment orientation, known as implicit sentiment. However, recent neural-network-based approaches pay little attention to the implicit sentiment entailed in this portion of reviews. To overcome this issue, we adopt supervised contrastive pre-training on large-scale sentiment-annotated corpora retrieved from in-domain language resources. By aligning the representations of implicit sentiment expressions with those sharing the same sentiment label, the pre-training process better captures both implicit and explicit sentiment orientation towards aspects in reviews. Experimental results show that our method achieves state-of-the-art performance on the SemEval-2014 benchmarks, and comprehensive analysis validates its effectiveness in learning implicit sentiment.
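The pre-training objective is in the spirit of a supervised contrastive loss over sentiment labels; the snippet below is the generic SupCon formulation, shown as a reference rather than the authors' exact implementation.

```python
# Generic supervised contrastive loss over sentiment-labeled representations.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """features: (batch, dim) sentence representations; labels: (batch,) sentiment labels."""
    feats = F.normalize(features, dim=-1)
    sim = feats @ feats.t() / temperature
    logits_mask = ~torch.eye(len(labels), dtype=torch.bool)      # exclude self-pairs
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & logits_mask
    exp_sim = torch.exp(sim) * logits_mask
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))
    mean_log_prob_pos = (pos_mask * log_prob).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return -mean_log_prob_pos.mean()

loss = supervised_contrastive_loss(torch.randn(16, 128), torch.randint(0, 3, (16,)))
print(loss.item())
```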
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or only marginally benefit, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification for the ViT-Tiny, ViT-Small, and ViT-Base models, with gains of +4.2%/+2.4%/+1.4%, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way to develop small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
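Finding 1) can be made concrete with a token-relation distillation loss in which the student matches the teacher's token-to-token similarity pattern rather than individual features; the sketch below is a generic formulation, not the released TinyMIM code.

```python
# Generic token-relation distillation loss between a small student and a large teacher.
import torch
import torch.nn.functional as F

def token_relation_distill_loss(student_tokens, teacher_tokens, temperature=1.0):
    """student_tokens: (batch, N, d_s); teacher_tokens: (batch, N, d_t), e.g. from an
    intermediate teacher layer. Relations are softmax-normalized token similarities."""
    def relations(tokens):
        t = F.normalize(tokens, dim=-1)
        return F.log_softmax(t @ t.transpose(1, 2) / temperature, dim=-1)  # (batch, N, N)

    s_rel = relations(student_tokens)
    t_rel = relations(teacher_tokens)
    # KL divergence between teacher and student relation distributions per query token.
    return F.kl_div(s_rel, t_rel, log_target=True, reduction="batchmean")

loss = token_relation_distill_loss(torch.randn(2, 196, 192), torch.randn(2, 196, 768))
print(loss.item())
```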